Efficient and effective algorithms for training single-hidden-layer neural networks
Authors
Abstract
Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs), owing to their powerful modeling ability and the existence of efficient learning algorithms. A prominent example of such algorithms is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden units than conventional neural networks do to achieve comparable classification accuracy. The large number of hidden units translates into significantly increased test time, which in practice is more valuable than training time. In this paper, we propose a series of new efficient learning algorithms for SHLNNs. Our algorithms exploit both the structure of SHLNNs and the gradient information over all training epochs, and update the weights in the direction along which the overall squared error is reduced the most. Experiments on the MNIST handwritten digit recognition task and the MAGIC gamma telescope dataset show that the proposed algorithms obtain significantly better classification accuracy than ELM when the same number of hidden units is used. To reach the same classification accuracy, our best algorithm requires only 1/16 of the model size, and thus approximately 1/16 of the test time, compared with ELM. This advantage is gained at the expense of at most 5 times the training cost of ELM.

Submitted to Pattern Recognition Letters, March 2011.

Keywords: neural network, extreme learning machine, accelerated gradient algorithm, weighted algorithm, MNIST
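For background, the ELM baseline the abstract describes (random, untrained lower-layer weights; upper-layer weights solved in closed form by least squares) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation; the function names and the toy regression data are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden):
    """ELM-style training of a single-hidden-layer network:
    the lower-layer weights are drawn at random and never updated;
    only the output weights are fit, via linear least squares."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem (illustrative data, not from the paper).
X = rng.standard_normal((200, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = elm_train(X, Y, n_hidden=100)
mse = np.mean((elm_predict(X, W, b, beta) - Y) ** 2)
```

Because only `beta` is learned, training reduces to one linear solve, which is why ELM is fast to train but tends to need many hidden units to model the data well.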
Similar resources
Prediction of breeding values for the milk production trait in Iranian Holstein cows applying artificial neural networks
Artificial neural networks, learning algorithms and mathematical models that mimic the information-processing ability of the human brain, can be used for non-linear and complex data. The aim of this study was to predict the breeding values for the milk production trait in Iranian Holstein cows using artificial neural networks. Data on 35167 Iranian Holstein cows recorded between 1998 and 2009 were ...
Evaluation of effects of operating parameters on combustible material recovery in coking coal flotation process using artificial neural networks
In this research work, the effects of flotation parameters on coking coal flotation combustible material recovery (CMR) were studied using the artificial neural network (ANN) method. The input parameters of the network were the pulp solid weight content, pH, collector dosage, frother dosage, conditioning time, flotation retention time, feed ash content, and rotor rotation speed. In order to sele...
Designing an expert system for differential diagnosis of β-Thalassemia minor and Iron-Deficiency anemia using neural network
Introduction: Artificial neural networks are systems that use complex, non-algorithmic solutions for problem solving. These characteristics make them suitable for various medical applications. This study set out to investigate the application of artificial neural networks to the differential diagnosis of thalassemia minor and iron-deficiency anemia. Methods: It is...
A fast pre-training method based on error minimization for learning convergence of deep-structured neural networks
In this paper, we propose an efficient method for pre-training a deep bottleneck neural network (DBNN). Pre-training is used to set initial values for the network weights; convergence of a DBNN is difficult because of its many local minima, while a good initialization of the network weights can avoid some of them. This method divides the DBNN into multiple single-hidden-layer networks and adjusts them, then we...
Wavelet Neural Network with Random Wavelet Function Parameters
The training algorithm of Wavelet Neural Networks (WNNs) is a bottleneck that affects the accuracy of the final WNN model. Several methods have been proposed for training WNNs. From the perspective of our research, most of these algorithms are iterative and need to adjust all the parameters of the WNN. This paper proposes a one-step learning method which changes the weights between hidden la...
Journal:
- Pattern Recognition Letters
Volume: 33, Issue: -
Pages: -
Publication date: 2012